65 research outputs found

    Universal Organization of Resting Brain Activity at the Thermodynamic Critical Point

    Thermodynamic criticality describes emergent phenomena in a wide variety of complex systems. In the mammalian brain, the complex dynamics that spontaneously emerge from neuronal interactions have been characterized as neuronal avalanches, a form of critical branching dynamics. Here, we show that neuronal avalanches also reflect an organization of brain dynamics close to a thermodynamic critical point. We recorded spontaneous cortical activity in monkeys and humans at rest using high-density intracranial microelectrode arrays and magnetoencephalography, respectively. By numerically changing a control parameter equivalent to thermodynamic temperature, we observed typical critical behavior in cortical activities near the actual physiological condition, including the phase transition of an order parameter as well as the divergence of susceptibility and specific heat. Finite-size scaling of these quantities allowed us to derive robust critical exponents, highly consistent across monkeys and humans, that uncover a distinct yet universal organization of brain dynamics.
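
    The avalanche construction this analysis rests on is standard and easy to sketch. Below is a minimal Python illustration of extracting avalanche sizes from a pooled spike train, assuming the usual convention that an avalanche is a maximal run of non-empty time bins; the function name and binning parameters are illustrative, not taken from the paper.

        import numpy as np

        def avalanche_sizes(spike_times, dt):
            """Bin a pooled spike train at resolution dt; an avalanche is a
            maximal run of non-empty bins, and its size is the number of
            events it contains."""
            edges = np.arange(spike_times.min(), spike_times.max() + dt, dt)
            counts, _ = np.histogram(spike_times, bins=edges)
            sizes, current = [], 0
            for c in counts:
                if c > 0:
                    current += c          # avalanche continues
                elif current > 0:
                    sizes.append(current) # empty bin ends the avalanche
                    current = 0
            if current > 0:
                sizes.append(current)
            return np.asarray(sizes)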

    An Information Maximization Approach to Overcomplete and Recurrent Representations

    The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections and the recurrent interactions yields simple learning rules for both sets of parameters. The conventional independent components analysis (ICA) learning algorithm is recovered as the special case with an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.
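
    The special case mentioned at the end, square, noise-free, and non-recurrent, is the classic Bell-Sejnowski Infomax/ICA rule. A minimal sketch using the natural-gradient form of the update (the function name and constants are illustrative):

        import numpy as np

        def infomax_ica(X, n_iter=500, lr=0.01, seed=0):
            """Square, noise-free Infomax ICA (Bell & Sejnowski, 1995):
            maximize the output entropy of y = g(W @ x) with logistic g.
            Natural-gradient update: dW = (I + (1 - 2y) u^T) W, u = W @ x."""
            rng = np.random.default_rng(seed)
            n, m = X.shape                        # n sensors/sources, m samples
            W = np.eye(n) + 0.1 * rng.standard_normal((n, n))
            for _ in range(n_iter):
                U = W @ X                         # linear responses, shape (n, m)
                Y = 1.0 / (1.0 + np.exp(-U))      # logistic nonlinearity
                W += lr * (np.eye(n) + (1.0 - 2.0 * Y) @ U.T / m) @ W
            return W                              # rows approximate unmixing filters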

    The emergence of synaesthesia in a Neuronal Network Model via changes in perceptual sensitivity and plasticity

    Synaesthesia is an unusual perceptual experience in which an inducer stimulus triggers a percept in a different domain in addition to its own. To explore the conditions under which synaesthesia evolves, we studied a neuronal network model that represents two recurrently connected neural systems. The interactions in the network evolve according to learning rules that optimize sensory sensitivity. We demonstrate several scenarios, such as sensory deprivation or heightened plasticity, under which synaesthesia can evolve even though the inputs to the two systems are statistically independent and the initial cross-talk interactions are zero. Sensory deprivation is the known causal mechanism for acquired synaesthesia, and increased plasticity is implicated in developmental synaesthesia. The model unifies different causes of synaesthesia within a single theoretical framework and repositions synaesthesia not as a quirk of aberrant connectivity but as a functional brain state that can emerge as a consequence of optimizing sensory information processing.
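
    The core mechanism, cross-talk weights growing from exactly zero under heightened plasticity despite statistically independent inputs, can be illustrated with a deliberately reduced toy that is not the paper's model; the learning rule and every constant below are assumptions for illustration only:

        import numpy as np

        def crosstalk(eta, lam=0.05, n_steps=20000, seed=2):
            """System A receives its own input s_a plus w * s_b from system B.
            A Hebbian term (rate eta) competes with weight decay (rate lam);
            w = 0 stays stable only while plasticity is weak (eta small)."""
            rng = np.random.default_rng(seed)
            w = 0.0                                 # initial cross-talk is zero
            for _ in range(n_steps):
                s_a, s_b = rng.standard_normal(2)   # statistically independent inputs
                r_a = np.tanh(s_a + w * s_b)        # saturating response of system A
                w += eta * r_a * s_b - lam * w      # Hebbian growth vs. decay
            return w

        print(crosstalk(eta=0.01))   # low plasticity: cross-talk stays near zero
        print(crosstalk(eta=0.20))   # heightened plasticity: cross-talk emerges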

    Fast Coding of Orientation in Primary Visual Cortex

    Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically or quantitatively. Here we use a simple ‘race to threshold’ readout mechanism to quantify how much information the spike-time latencies of primary visual cortex (V1) cells carry about stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak latency tuning can provide a reliable onset detector. We find that spike-latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time on the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made.
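
    A race-to-threshold readout of the kind described is straightforward to sketch: each candidate orientation pools the spikes of the neurons preferring it, and the first pool whose spike count crosses a threshold (scaled linearly with pool size, as the abstract requires) wins. The data layout and parameter names below are assumptions:

        import numpy as np

        def race_to_threshold(spikes_by_orientation, theta_per_neuron=0.5):
            """spikes_by_orientation: dict mapping orientation -> list of
            per-neuron spike-time arrays, aligned to the estimated stimulus
            onset. The first pool to accumulate a number of spikes
            proportional to its size determines the decision."""
            best_ori, best_time = None, np.inf
            for ori, neurons in spikes_by_orientation.items():
                pooled = np.sort(np.concatenate(neurons))          # pooled spike train
                theta = int(np.ceil(theta_per_neuron * len(neurons)))
                if len(pooled) >= theta and pooled[theta - 1] < best_time:
                    best_ori, best_time = ori, pooled[theta - 1]   # time of theta-th spike
            return best_ori, best_time                             # decision and its latency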

    Optimal Information Representation and Criticality in an Adaptive Sensory Recurrent Neuronal Network

    Recurrent connections play an important role in cortical function, yet their exact contribution to network computation remains unknown. The principles guiding the long-term evolution of these connections are poorly understood as well. Gaining insight into their computational role and into the mechanisms shaping their pattern would therefore be of great importance. To that end, we studied the learning dynamics and emergent recurrent connectivity in a sensory network model based on a first-principles information-theoretic approach. As a test case, we applied this framework to a model of a hypercolumn in the visual cortex and found that the evolved connections between orientation columns have a "Mexican hat" profile, consistent with empirical data and previous modeling work. Furthermore, we found that optimal information representation is achieved when the network operates near a critical point in its dynamics. Neuronal networks working near such a phase transition are most sensitive to their inputs and are thus optimal in terms of information representation. Nevertheless, a mild change in the pattern of interactions may cause such networks to undergo a transition into a different regime of behavior, in which the network activity is dominated by its internal recurrent dynamics and does not reflect the objective input. We discuss several mechanisms by which the pattern of interactions can be driven into this supercritical regime and relate them to various neurological and neuropsychiatric phenomena.
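
    For concreteness, a "Mexican hat" interaction profile on a ring of orientation columns is commonly modeled as a difference of Gaussians, as in the illustrative sketch below (not fitted to the paper). In a linear rate model, the supercritical regime discussed at the end corresponds to the largest eigenvalue of the interaction matrix crossing one.

        import numpy as np

        def mexican_hat(n=180, a_e=1.0, a_i=0.6, s_e=15.0, s_i=45.0):
            """Difference-of-Gaussians interaction matrix on a ring of n
            orientation columns: short-range excitation (width s_e degrees)
            minus broader inhibition (width s_i degrees)."""
            idx = np.arange(n)
            d = np.abs(idx[:, None] - idx[None, :])
            d = np.minimum(d, n - d) * (180.0 / n)      # circular distance in degrees
            return (a_e * np.exp(-d**2 / (2 * s_e**2))
                    - a_i * np.exp(-d**2 / (2 * s_i**2)))

        J = mexican_hat()
        # In a linear rate model (tau * dr/dt = -r + J @ r + input), the internal
        # recurrent dynamics dominate the input once this exceeds 1:
        print(np.linalg.eigvalsh(J).max())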

    Rate models for conductance-based cortical neuronal networks

    Population rate models provide powerful tools for investigating the principles that underlie the cooperative function of large neuronal systems. However, biophysical interpretations of these models have been ambiguous, which has severely limited their applicability to real neuronal systems and their experimental validation. In this work, we show that conductance-based models of large cortical neuronal networks can be described by simplified rate models, provided that the network state does not possess a high degree of synchrony. We first derive a precise mapping between the parameters of the rate equations and those of the conductance-based network models for time-independent inputs. This mapping is based on the assumption that the effect of increasing the cell's input conductance on its f-I curve is mainly subtractive. This assumption is confirmed by a single-compartment Hodgkin-Huxley-type model with a transient potassium A-current. This approach is applied to the study of a network model of a hypercolumn in primary visual cortex. We also explore extensions of the rate model to the dynamic domain by studying the firing-rate response of our conductance-based neuron to time-dependent noisy inputs. We show that the dynamics of this response can be approximated by a time-dependent second-order differential equation. This phenomenological single-cell rate model is used to calculate the response of a conductance-based network to time-dependent inputs.
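
    A hedged sketch of what such a phenomenological rate model might look like: a generic second-order differential equation driven by a threshold-linear f-I curve whose threshold shifts subtractively with input conductance. The specific form and all constants are assumptions for illustration, not the paper's fitted model.

        import numpy as np

        def simulate_rate(I, dt=0.1, tau1=5.0, tau2=15.0,
                          beta=0.5, theta=1.0, g=0.0, k=0.8):
            """Euler integration of a second-order rate model,
                tau1*tau2 * r'' + (tau1 + tau2) * r' + r = f(I(t)),
            with a threshold-linear f whose threshold rises with the input
            conductance g (the subtractive effect). Times in ms, constants arbitrary."""
            f = beta * np.maximum(I - (theta + k * g), 0.0)
            r, v = 0.0, 0.0                       # rate and its time derivative
            out = np.empty(len(I), dtype=float)
            for t, ft in enumerate(f):
                a = (ft - r - (tau1 + tau2) * v) / (tau1 * tau2)
                v += dt * a
                r += dt * v
                out[t] = r
            return out

        rates = simulate_rate(2.0 * np.ones(5000))    # response to a step input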

    Can a time-varying external drive give rise to apparent criticality in neural systems? - Fig 2

    (Color) Avalanche size and duration distributions for three example processes, as exemplified in the raster plots above, all with the same mean rate: A&B. homogeneous Poisson process, C&D. inhomogeneous Poisson process, E&F. critical branching process (BP). Different colors represent different bin sizes, Δt, at r = 1 (or, equivalently, different rates r at Δt = 1). Colored lines or dots are numerical results; black lines are analytical results. A-D. For both the homogeneous and the inhomogeneous Poisson processes, an increase in Δt (or r) makes the size distribution P_S(s) and the duration distribution P_D(d) flatter. E&F. For the critical system, a change in Δt (or r) hardly changes P_S(s), which shows a power law with exponent -1.5 (dashed). The slope of P_D(d) changes systematically, because d is in units of bins. In units of time steps, P_D(d) would also change very little and show the exponent -2 (dashed).
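
    For reference, the critical branching process of panels E&F can be simulated in a few lines; a Galton-Watson sketch with Poisson offspring (sigma = 1 is the critical point), with illustrative names:

        import numpy as np

        def avalanche(sigma=1.0, rng=None, max_gen=10**6):
            """One avalanche of a Galton-Watson branching process: each active
            unit spawns Poisson(sigma) descendants per time step. sigma = 1 is
            critical, giving P(s) ~ s^-1.5 and P(d) ~ d^-2 (the dashed lines)."""
            rng = rng or np.random.default_rng()
            active, size, duration = 1, 1, 1
            while active > 0 and duration < max_gen:
                active = rng.poisson(sigma * active)
                if active > 0:
                    size += active
                    duration += 1
            return size, duration

        rng = np.random.default_rng(0)
        sizes, durations = zip(*(avalanche(rng=rng) for _ in range(10000)))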